
    CMS dashboard task monitoring: A user-centric monitoring view

    The CMS experiment is now in a phase change in which people are turning more intensely to physics analysis and away from construction. This shift raises challenging issues for the monitoring of user analysis. Physicists must be able to monitor the execution status and the application- and grid-level messages of their tasks, which may run at any site within the CMS Virtual Organisation. The CMS Dashboard Task Monitoring project provides this information to individual analysis users by collecting and exposing a user-centric set of information about submitted tasks, including the reason of failure, the distribution by site and over time, the consumed time and the efficiency. The development was user-driven: physicists were invited to test the prototype in order to gather further requirements and identify weaknesses in the application.
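
    A minimal sketch of the kind of user-centric aggregation such a dashboard might perform is given below; the TaskJob record, the status names and the efficiency formula are illustrative assumptions, not the actual Dashboard schema.

        from collections import Counter
        from dataclasses import dataclass

        @dataclass
        class TaskJob:
            """One grid job of a user's analysis task (illustrative fields only)."""
            site: str
            status: str          # e.g. "succeeded" or "failed"
            failure_reason: str  # empty string when the job succeeded
            cpu_time: float      # seconds of CPU consumed
            wall_time: float     # seconds between start and finish

        def summarise_task(jobs: list[TaskJob]) -> dict:
            """Aggregate one task the way a user-centric monitor might present it."""
            failures = Counter(j.failure_reason for j in jobs if j.status == "failed")
            cpu = sum(j.cpu_time for j in jobs)
            wall = sum(j.wall_time for j in jobs)
            return {
                "jobs": len(jobs),
                "failure_reasons": dict(failures),          # reason -> count of failed jobs
                "jobs_per_site": dict(Counter(j.site for j in jobs)),
                "cpu_time_s": cpu,
                "efficiency": cpu / wall if wall else 0.0,  # CPU time / wall-clock time
            }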

    Adaptability as a Strategy of Self-Identity in the Modern Information Space

    The values, meanings and goals of education in the new century of information technologies dictate new strategies of teaching, development and self-development of the individual. Innovative co-adaptation in a rapidly changing society is an active strategy for the development of the individual, driven by cooperation, communication and creative activity.

    Producing phrasal prominence in German

    This study examines the relative change in a number of acoustic parameters usually associated with the production of prominences. The production of six German sentences under different question-answer conditions provides de-accented and accented versions of the same words in broad and narrow focus. Normalised energy, F0, duration and spectral measures were found to form a stable hierarchy in their exponency of the three degrees of accentuation.
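
    As a rough illustration of the kind of per-word acoustic measurements involved, the sketch below computes a duration, a normalised energy and a crude F0 estimate for one word segment; the normalisation choice and the autocorrelation-based pitch estimate are assumptions made for the example, not the paper's actual procedure.

        import numpy as np

        def word_measures(segment: np.ndarray, sr: int, ref_rms: float) -> dict:
            """Duration, normalised RMS energy and a crude F0 estimate for one word.

            segment: waveform samples of a single word; sr: sample rate in Hz;
            ref_rms: reference level (e.g. utterance mean RMS, > 0) used to normalise energy.
            """
            duration = len(segment) / sr
            rms = float(np.sqrt(np.mean(segment ** 2)))
            norm_energy_db = 20 * np.log10(rms / ref_rms)

            # Very simple autocorrelation pitch estimate, restricted to 75-400 Hz.
            x = segment - segment.mean()
            ac = np.correlate(x, x, mode="full")[len(x) - 1:]
            lo, hi = sr // 400, sr // 75
            lag = lo + int(np.argmax(ac[lo:hi]))
            return {"duration_s": duration, "norm_energy_db": norm_energy_db, "f0_hz": sr / lag}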

    Data growth and its impact on the SCOP database: new developments

    The Structural Classification of Proteins (SCOP) database is a comprehensive ordering of all proteins of known structure, according to their evolutionary and structural relationships. The SCOP hierarchy comprises the following levels: Species, Protein, Family, Superfamily, Fold and Class. While keeping the original classification scheme intact, we have changed the production of SCOP in order to cope with the rapid growth of new structural data and to facilitate the discovery of new protein relationships. We describe ongoing developments and new features implemented in SCOP. A new update protocol supports batch classification of new protein structures by their detected relationships at the Family and Superfamily levels, in contrast to our previous sequential handling of new structural data by release date. We introduce pre-SCOP, a preview of the SCOP developmental version that enables earlier access to the information on new relationships. We also discuss the impact of worldwide Structural Genomics initiatives, which are producing new protein structures at an increasing rate, on the rates of discovery and growth of protein families and superfamilies. SCOP can be accessed at http://scop.mrc-lmb.cam.ac.uk/scop.
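
    A toy sketch of the classification hierarchy and of grouping new structures by detected superfamily rather than by release date is shown below; the Node class and the detect_superfamily callable are hypothetical illustrations, not part of SCOP itself.

        from collections import defaultdict
        from dataclasses import dataclass, field

        # SCOP levels, listed here from the root of the hierarchy downwards.
        LEVELS = ["Class", "Fold", "Superfamily", "Family", "Protein", "Species"]

        @dataclass
        class Node:
            """One node in the classification tree, e.g. a fold or a superfamily."""
            level: str
            name: str
            children: dict = field(default_factory=dict)

            def child(self, level: str, name: str) -> "Node":
                # Create the child lazily so that classification can grow the tree.
                return self.children.setdefault((level, name), Node(level, name))

        def batch_classify(structures, detect_superfamily):
            """Group new structures by detected superfamily (batch-update sketch).

            detect_superfamily is a hypothetical callable mapping a structure to the
            superfamily it is related to, or None when no relationship is detected.
            """
            batches = defaultdict(list)
            for s in structures:
                batches[detect_superfamily(s)].append(s)
            return dict(batches)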

    An efficient strategy for the collection and storage of large volumes of data for computation

    In recent years an increasing amount of data has been produced and stored, a phenomenon known as Big Data. Social networks, the Internet of Things, scientific experiments and commercial services play a significant role in generating this vast amount of data. Three main factors characterise Big Data: Volume, Velocity and Variety, and all three need to be considered when designing a platform to support it. The Large Hadron Collider (LHC) particle accelerator at CERN comprises a number of data-intensive experiments, estimated to produce about 30 PB of data annually, and these data are propagated at a very high velocity. Traditional methods of collecting, storing and analysing data have become insufficient for managing such rapidly growing volumes, so an efficient strategy is needed to capture these data as they are produced. In this paper, a number of models are explored to identify the best approach for collecting and storing Big Data for analytics. An evaluation of the performance of full execution cycles of these approaches on the monitoring of the Worldwide LHC Computing Grid (WLCG) infrastructure for collecting, storing and analysing data is presented. Moreover, the models discussed are applied to a community-driven software solution, Apache Flume, to show how they can be integrated seamlessly.
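
    A minimal sketch of one such collection model, buffering incoming monitoring events and flushing them to storage in batches, is given below; the event source, the batch size and the store_batch sink are assumptions made for illustration, not the models evaluated in the paper. The same size-or-time flush trade-off is what an agent such as Apache Flume tunes between its channel and its sink.

        import json
        import time
        from pathlib import Path

        BATCH_SIZE = 1000      # flush after this many events...
        FLUSH_INTERVAL = 5.0   # ...or after this many seconds, whichever comes first

        def store_batch(batch: list, out_dir: Path) -> None:
            """Hypothetical sink: write one batch of events as a JSON-lines file."""
            out_dir.mkdir(parents=True, exist_ok=True)
            path = out_dir / f"events-{time.time_ns()}.jsonl"
            path.write_text("\n".join(json.dumps(e) for e in batch))

        def collect(event_source, out_dir: Path) -> None:
            """Consume events from an iterable source and store them in batches."""
            batch, last_flush = [], time.monotonic()
            for event in event_source:
                batch.append(event)
                if len(batch) >= BATCH_SIZE or time.monotonic() - last_flush >= FLUSH_INTERVAL:
                    store_batch(batch, out_dir)
                    batch, last_flush = [], time.monotonic()
            if batch:  # flush whatever is left when the source ends
                store_batch(batch, out_dir)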

    Search for a signal on intermediate baryon systems formation in hadron-nuclear and nuclear-nuclear interactions at high energies

    We have analyzed the behavior of different characteristics of hadron-nuclear and nuclear-nuclear interactions as a function of centrality to search for a signal of the formation of intermediate baryon systems. We observed that the data demonstrate a regime change and saturation. The angular distributions of slow particles exhibit some structure in the above-mentioned reactions at low energy. We believe that this structure could be connected with the formation and decay of a percolation cluster. With increasing mass of the colliding nuclei, the structure weakens and ultimately almost disappears. This indicates that the number of secondary internuclear interactions grows with the mass of the colliding nuclei, which could cause the disintegration of any intermediate formations, including clusters, and so reduce their influence on the angular distribution of the emitted particles.

    Use of grid tools to support CMS distributed analysis

    In order to prepare the Physics Technical Design Report, due by the end of 2005, the CMS experiment needs to simulate, reconstruct and analyse about 100 million events, corresponding to more than 200 TB of data. The data will be distributed to several computing centres. In order to give the world-wide dispersed physicists access to the whole data sample, CMS is developing a software layer that uses the Grid tools provided by the LCG project to access data and resources, and that aims to provide a user-friendly interface for physicists submitting analysis jobs. To achieve these aims, CMS will use Grid tools from the LCG-2 release as well as those being developed in the framework of the ARDA project. This work describes the current status and the future developments of the CMS analysis system.
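
    An illustrative sketch of what such a thin analysis layer does conceptually, splitting a dataset into grid jobs and handing them to a submission back end, is given below; the GridBackend interface, the job fields and the event-based splitting are hypothetical, not the actual CMS tools.

        from dataclasses import dataclass
        from typing import Protocol

        @dataclass
        class AnalysisJob:
            """A slice of a dataset to be processed by one grid job."""
            dataset: str
            first_event: int
            n_events: int
            executable: str

        class GridBackend(Protocol):
            """Hypothetical interface over whichever grid middleware submits the jobs."""
            def submit(self, job: AnalysisJob) -> str: ...

        def split_and_submit(dataset: str, total_events: int, events_per_job: int,
                             executable: str, backend: GridBackend) -> list:
            """Split an analysis of total_events into jobs and submit them all."""
            job_ids = []
            for first in range(0, total_events, events_per_job):
                n = min(events_per_job, total_events - first)
                job = AnalysisJob(dataset, first, n, executable)
                job_ids.append(backend.submit(job))  # returns a grid job identifier
            return job_ids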